
Collaborating Authors: Part 5


A Theoretical Insight into Attack and Defense of Gradient Leakage in Transformer

Li, Chenyang, Song, Zhao, Wang, Weixin, Yang, Chiwun

arXiv.org Artificial Intelligence

The Deep Leakage from Gradients (DLG) attack has emerged as a prevalent and highly effective method for extracting sensitive training data by inspecting exchanged gradients, posing a substantial threat to the privacy of individuals and organizations alike. This research presents a comprehensive analysis of the gradient leakage method when applied specifically to transformer-based models. Through meticulous examination, we show that data can be accurately recovered from gradients alone, and we rigorously characterize the conditions under which such attacks can be executed. Furthermore, we reevaluate the defense of adding noise to gradients, presenting a theoretical analysis of the associated privacy cost within the framework of differential privacy and proving that the Stochastic Gradient Descent (SGD) algorithm still converges under the perturbed gradients. The primary objective of this study is to deepen the understanding of gradient leakage attacks and defense strategies, and to contribute to the development of privacy-preserving techniques tailored to transformer-based models.
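To make the leakage idea concrete, here is a hypothetical miniature of a DLG-style attack, far simpler than the paper's transformer setting: a single linear neuron with squared loss, where the label is assumed known (DLG-style attacks often recover labels first). All names and the closed-form recovery below are illustrative, not the paper's method.

```python
import numpy as np

# Model: a single linear neuron with loss L = 0.5 * (w @ x - y)**2.
# The only thing the attacker observes is the leaked gradient
#   g = dL/dw = (w @ x - y) * x = r * x,   with r = w @ x - y.
rng = np.random.default_rng(0)
w = rng.normal(size=4)        # known (shared) model weights
x_true = rng.normal(size=4)   # private training input
y_true = 1.5                  # label, assumed known to the attacker
r_true = w @ x_true - y_true
g = r_true * x_true           # the leaked gradient

# Since g = r * x, we get w @ g = r * (w @ x) = r**2 + y*r, a quadratic
# in r whose discriminant y**2 + 4*(w @ g) equals (y + 2r)**2 >= 0.
# Solving for r and dividing recovers the private input exactly.
disc = y_true**2 + 4 * (w @ g)
roots = [(-y_true + s * np.sqrt(disc)) / 2 for s in (+1, -1)]
recovered = [g / r for r in roots if abs(r) > 1e-12]

err = min(np.linalg.norm(x - x_true) for x in recovered)
```

One of the two quadratic roots reproduces the true residual, so one candidate matches the private input to floating-point precision; this is the one-neuron analogue of the gradient-matching optimization used against deeper models.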


Visual Perception for Self-Driving Cars! Part 5: Multi-Task Learning

#artificialintelligence

This article is part of a series. Check out the full series: Part 1, Part 2, Part 3, Part 4, Part 5, Part 6! So far we have covered segmentation, detection, and tracking challenges for our self-driving cars. Today, we are going to talk about the camera-to-bird's-eye-view transformation that helps us understand the environment and make better driving decisions automatically. So let's start with the problem definition!
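A common building block of the camera-to-bird's-eye-view transformation is a planar homography (inverse perspective mapping) from image pixels to ground-plane coordinates. The matrix `H` below is purely illustrative, not a real camera calibration, and `image_to_bev` is a hypothetical helper:

```python
import numpy as np

# Illustrative homography mapping image pixels (u, v) on the ground
# plane to bird's-eye-view coordinates; real values come from camera
# intrinsics/extrinsics or a calibration procedure.
H = np.array([[1.0, 0.2, -50.0],
              [0.0, 1.5, -80.0],
              [0.0, 0.002, 1.0]])

def image_to_bev(points_uv, H):
    """Map an Nx2 array of image pixels to BEV ground coordinates."""
    uv1 = np.hstack([points_uv, np.ones((len(points_uv), 1))])
    xyw = uv1 @ H.T                    # apply homography in homogeneous coords
    return xyw[:, :2] / xyw[:, 2:3]    # divide out the homogeneous coordinate

pts = np.array([[320.0, 400.0], [100.0, 450.0]])
bev = image_to_bev(pts, H)
```

In practice libraries such as OpenCV perform this warp densely over the whole image, but the per-point arithmetic is exactly this homogeneous multiply-and-divide.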


The development of cyber warfare in the US – part 5

#artificialintelligence

… DevSecOps (Development Security Operations) and AI/Machine Learning. … cyber intelligence to reduce the effectiveness of enemy operations.


Part 5: Intelligence - A process to change the composition of SpaceTime

#artificialintelligence

"A fundamental problem in artificial intelligence is that nobody really knows what intelligence is." That's the opening sentence of "Universal Intelligence: A Definition of Machine Intelligence" (link) authored by Shane Legg and Marcus Hutter. If those names sound familiar, they are. Legg is a co-founder of DeepMind and Hutter is a senior scientist at DeepMind - two very well accomplished individuals with a long track record researching artificial general intelligence. While both have done exemplary work in this field, this paper, in my opinion, is poor.


APOSTLE TALK - Future News Now! : THERE'S MORE THAN ARTIFICIAL INTELLIGENCE - PART 5

#artificialintelligence

PURPOSE OF THIS TEACHING As with the previous four sessions, our aim is both to present the advantages of Artificial Intelligence and to expose the potential EVIL resident in this very science ... especially in the hands of EVIL people with EVIL motives. Computer scientist Bill Joy, and many other writers, have identified clusters of technological advances that they deem critical to the future of humanity. Joy warns that these advances have the potential to be used by "elites" for either good or evil. ETHICAL AND MORAL QUESTIONS We asked some questions in Part 1, Part 2, Part 3 and Part 4 concerning valid bio-ethics and moral questions. Now … think about Artificial Intelligence and its use of Nanotechnology from a human perspective.


My 3 months with Computer Vision -- Part 5 -- Transfer Learning for Stanford Dog Dataset

#artificialintelligence

Let's start with the 3rd Project -- Stanford Dog Dataset. This dataset asks you to identify dogs of 120 different breeds. We could go with our previous approach, but that would take a lot of computation and a lot of time. Let's introduce a new concept then.
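The new concept is transfer learning: reuse a pretrained feature extractor and train only a small new head. Here is a minimal numpy sketch of that idea, assuming no real pretrained network; a frozen random projection stands in for the pretrained backbone, and the labels are synthetic rather than the actual 120 dog breeds.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d_in, d_feat, n_classes = 200, 64, 32, 5

X = rng.normal(size=(n, d_in))

# Frozen "pretrained" feature extractor (stand-in for a CNN backbone).
W_frozen = rng.normal(size=(d_in, d_feat)) / np.sqrt(d_in)
feats = np.maximum(X @ W_frozen, 0.0)          # ReLU features

# Synthetic labels that are learnable from the frozen features.
W_true = rng.normal(size=(d_feat, n_classes))
y = np.argmax(feats @ W_true, axis=1)

# Train only a new softmax head; the backbone stays untouched.
W_head = np.zeros((d_feat, n_classes))
for _ in range(500):
    logits = feats @ W_head
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    p = np.exp(logits)
    p /= p.sum(axis=1, keepdims=True)
    p[np.arange(n), y] -= 1.0                    # dL/dlogits for cross-entropy
    W_head -= 0.5 * feats.T @ p / n              # update the head only

acc = float((np.argmax(feats @ W_head, axis=1) == y).mean())
```

With a real backbone (e.g. a torchvision or Keras ImageNet model) the structure is identical: freeze the feature layers, replace the final classifier with a fresh 120-way head, and train only that head.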


Machine Learning Exercises In Python, Part 5

#artificialintelligence

This post is part of a series covering the exercises from Andrew Ng's machine learning class on Coursera. The original code, exercise text, and data files for this post are available here. In part four we wrapped up our implementation of logistic regression by extending our solution to handle multi-class classification and testing it on the hand-written digits data set. Using just logistic regression we were able to hit a classification accuracy of about 97.5%, which is reasonably good but pretty much maxes out what we can achieve with a linear model. In this blog post we'll again tackle the hand-written digits data set, but this time using a feed-forward neural network with backpropagation.
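As a rough sketch of what a feed-forward network trained with backpropagation looks like (on tiny XOR data rather than the digits set, and using none of the exercise's actual code), assuming a single sigmoid hidden layer:

```python
import numpy as np

# Toy feed-forward network: 2 inputs -> 8 sigmoid hidden units -> 1 output,
# trained by backpropagation on the XOR problem.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):
    h = sigmoid(X @ W1 + b1)                 # forward pass, hidden layer
    out = sigmoid(h @ W2 + b2)               # forward pass, output layer
    d_out = (out - y) * out * (1 - out)      # backprop through output sigmoid
    d_h = (d_out @ W2.T) * h * (1 - h)       # backprop through hidden sigmoid
    W2 -= h.T @ d_out; b2 -= d_out.sum(0)    # gradient-descent updates
    W1 -= X.T @ d_h;   b1 -= d_h.sum(0)

preds = (out > 0.5).astype(int)
```

The real exercise uses a 400-25-10 architecture with regularized cross-entropy, but the forward/backward pattern is the same as this four-example toy.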


Understanding Python: Part 5

#artificialintelligence

After having seen loops and conditional statements in the previous article, we now move forward with the set of topics called the "Fantastic Four". This includes functions (built-in, lambda, recursive) and list comprehensions, which constitute the core of Python programming. Let us see each concept in detail with relevant examples. Functions are broadly divided into built-in, user-defined, and lambda (anonymous). Each category is self-explanatory except lambda functions, which can be used on the fly, as opposed to conventional user-defined functions.
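A few self-contained examples of these "Fantastic Four" topics together (the function names here are illustrative, not from the article):

```python
# User-defined, recursive function.
def factorial(n):
    return 1 if n <= 1 else n * factorial(n - 1)

# Lambda (anonymous) function, defined on the fly.
square = lambda x: x * x

nums = [1, 2, 3, 4, 5]

# List comprehension with a filter condition.
evens_squared = [square(n) for n in nums if n % 2 == 0]

# Built-in function.
total = sum(nums)
```

Here `factorial(5)` evaluates to `120`, `evens_squared` to `[4, 16]`, and `total` to `15`.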


The Computer Vision Pipeline, Part 5: Classifier learning algorithms and conclusion

#artificialintelligence

This article series was designed to give you a 30,000-foot overview of computer vision systems and their applications. I don't expect you to have a deep understanding of the pipeline components yet.